Conversation
- Update backend, frontend, and root package.json versions to 0.7.0
- Add classes array to NodeHieraData and NodeHieraDataResponse types
- Modify classifyKeyUsage method to return included classes alongside used/unused keys
- Replace getNodeCatalog with getNodeResources for more reliable class extraction
- Extract Class resources directly from resourcesByType instead of filtering catalog
- Update HieraService to include classes in node Hiera data response
- Include classes in API response for NodeHieraTab component consumption
- Improve logging to show all classes found instead of just examples
- Add warning log when no Class resources are found for debugging purposes
- Update Navigation component to display v0.7.0 version number
- Fix date range calculation to include today in the requested number of days instead of adding an extra day
- Change end date to current moment instead of end of day to show partial data for today
- Adjust start date calculation from `days` to `days - 1` to properly include today in the count
- Normalize end date to start of day when comparing dates to prevent off-by-one errors
- Add debug logging for date range and converted history length
- Update all test expectations to reflect correct day counts (e.g., 7 days returns 7 days, not 8)
- Add found/not-found filter state to NodeHieraTab component for better data filtering
- This fixes the inconsistency where requesting N days of history would return N+1 days
- Implemented AnsiblePlaybookInterface for executing playbooks with real-time output.
- Created AnsibleSetupGuide for configuring Ansible integration.
- Updated ExecutionList to display execution tool used (Bolt or Ansible).
- Enhanced PackageInstallInterface to support Ansible as an execution tool.
- Modified ReExecutionButton to handle execution tool context.
- Updated IntegrationBadge to include Ansible.
- Added integration colors for Ansible in integrationColors.svelte.ts.
- Updated NodeDetailPage and IntegrationSetupPage to support Ansible integration.
- Enhanced command execution in NodeDetailPage to allow selection of execution tool.
- Implement InformationSourcePlugin interface for Ansible plugin
- Add getInventory() method to retrieve managed nodes from Ansible
- Add getNodeFacts() method to gather system facts via ansible setup module
- Enhance health checks to verify ansible-inventory binary availability
- Update plugin type from "execution" to "both" to support dual capabilities
- Add Facts and Node type imports from bolt types
- Convert Ansible facts to Bolt-compatible format for unified fact representation
- Update HomePage and InventoryPage to display Ansible-sourced inventory data
- Update server configuration to register Ansible as information source
Pull request overview
This pull request introduces Ansible as a new execution integration alongside Bolt, bumping the version from 0.6.0 to 0.7.0. It adds the ability to execute commands, install packages, and run playbooks using Ansible as an alternative to Bolt. The PR also includes improvements to Hiera functionality with class-aware key classification and fixes to the Puppet run history service date range logic.
Changes:
- Adds Ansible integration with AnsibleService and AnsiblePlugin supporting command execution, package management, and playbook execution
- Introduces tool selection UI in frontend allowing users to choose between Bolt and Ansible for command and package operations
- Adds execution_tool column to database schema to track which tool (bolt/ansible) was used for each execution
- Reorganizes code by moving Bolt types from `backend/src/bolt/types.ts` to `backend/src/integrations/bolt/types.ts`
- Improves Hiera integration to use class-aware key classification based on catalog data
- Fixes Puppet run history date range calculation (now correctly returns N days including today, not N+1 days)
Reviewed changes
Copilot reviewed 69 out of 71 changed files in this pull request and generated 20 comments.
| File | Description |
|---|---|
| package.json, package-lock.json, frontend/package.json, backend/package.json | Version bump to 0.7.0 across all packages |
| frontend/src/components/Navigation.svelte | Version display updated to v0.7.0 |
| frontend/src/components/AnsibleSetupGuide.svelte | New component providing setup instructions for Ansible integration |
| frontend/src/components/AnsiblePlaybookInterface.svelte | New component for executing Ansible playbooks from node detail page |
| frontend/src/pages/NodeDetailPage.svelte | Adds tool selection dropdown and Ansible playbook interface |
| frontend/src/components/PackageInstallInterface.svelte | Adds tool selection between Bolt and Ansible for package operations |
| frontend/src/lib/integrationColors.svelte.ts | Adds Ansible color configuration (blue #1A4D8F) |
| frontend/src/components/IntegrationBadge.svelte | Adds Ansible label mapping |
| frontend/src/components/ExecutionList.svelte | Adds Tool column to execution list displaying execution_tool |
| frontend/src/components/ReExecutionButton.svelte | Stores executionTool in session storage for re-execution |
| frontend/src/pages/InventoryPage.svelte | Adds Ansible source name mapping |
| frontend/src/pages/IntegrationSetupPage.svelte | Adds route for Ansible setup guide |
| frontend/src/pages/HomePage.svelte | Updates branding to "Puppet Ansible Bolt Awesome Web Interface" |
| frontend/src/pages/ExecutionsPage.svelte | Adds executionTool field to execution interface |
| frontend/src/components/NodeHieraTab.svelte | Refactors filters, adds classes display, removes old classification mode |
| backend/src/integrations/ansible/AnsibleService.ts | New service implementing Ansible CLI execution for commands, packages, playbooks, and inventory |
| backend/src/integrations/ansible/AnsiblePlugin.ts | New plugin implementing ExecutionToolPlugin and InformationSourcePlugin for Ansible |
| backend/src/routes/playbooks.ts | New router for Ansible playbook execution |
| backend/src/routes/commands.ts | Adds tool parameter and tool selection logic |
| backend/src/routes/packages.ts | Adds tool parameter and conditional execution based on selected tool |
| backend/src/routes/integrations/status.ts | Adds Ansible status check |
| backend/src/server.ts | Registers Ansible plugin with IntegrationManager |
| backend/src/config/schema.ts, ConfigService.ts | Adds Ansible configuration schema and parsing |
| backend/src/database/schema.sql, migrations.sql | Adds execution_tool column to executions table |
| backend/src/database/ExecutionRepository.ts | Adds executionTool field handling |
| backend/src/integrations/bolt/types.ts | Moved from backend/src/bolt/types.ts (reorganization) |
| backend/src/integrations/bolt/BoltService.ts, BoltPlugin.ts | Updated imports after types reorganization |
| backend/src/integrations/hiera/HieraService.ts | Implements class-aware key classification using catalog classes |
| backend/src/integrations/hiera/types.ts | Adds classes field to NodeHieraData |
| backend/src/services/PuppetRunHistoryService.ts | Fixes date range calculation to correctly include today |
| backend/test/**/*.ts | Updates imports to reflect Bolt types reorganization |
| backend/test/services/PuppetRunHistoryService.test.ts | Updates test expectations for corrected date range logic |
| docs/integrations/ansible.md | New comprehensive Ansible integration setup guide |
| docs/configuration.md | Adds Ansible configuration section |
| README.md | Updates description, prerequisites, version history with Ansible integration |
| backend/.env.example, .env.example | Adds Ansible environment variable examples |
```ts
const args = [
  nodeId,
  "-i",
  this.inventoryPath,
  "-m",
  "package",
  "-a",
  JSON.stringify(moduleArgs),
];
```
The Ansible package module args are passed as JSON.stringify(moduleArgs) to the -a flag. However, Ansible's ad-hoc command syntax typically expects module arguments in a different format (key=value pairs). The correct format should be either using --args with proper quoting, or converting the moduleArgs object to key=value format. For example, the command should look like: ansible host -m package -a "name=curl state=present" not ansible host -m package -a '{"name":"curl","state":"present"}'. This will likely cause Ansible to fail with a parsing error.
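A minimal sketch of the key=value conversion described above. A later commit in this PR references a `toModuleArgString` method, but its actual implementation is not shown here; the quoting and escaping rules below are assumptions:

```typescript
// Sketch: serialize module args to Ansible's ad-hoc key=value format,
// e.g. { name: "curl", state: "present" } -> name=curl state=present
// (Hypothetical helper; quoting/escaping rules are assumptions.)
function toModuleArgString(moduleArgs: Record<string, string | number | boolean>): string {
  return Object.entries(moduleArgs)
    .map(([key, value]) => {
      const str = String(value);
      // Quote values containing whitespace or quotes so Ansible parses them as one token
      const needsQuoting = /[\s"']/.test(str);
      const escaped = str.replace(/"/g, '\\"');
      return needsQuoting ? `${key}="${escaped}"` : `${key}=${str}`;
    })
    .join(" ");
}

console.log(toModuleArgString({ name: "curl", state: "present" }));
// → name=curl state=present
```

The `-a` argument then becomes `this.toModuleArgString(moduleArgs)` instead of `JSON.stringify(moduleArgs)`.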
@copilot open a new pull request to apply changes based on this feedback
```svelte
{#if executionStream && expertMode.enabled && (executionStream.executionStatus === 'running' || executionStream.isConnecting)}
  <div>
    <h3 class="mb-2 text-sm font-medium text-gray-700 dark:text-gray-300">Real-time Output:</h3>
    <RealtimeOutputViewer stream={executionStream} executionId={currentExecutionId} autoConnect={false} />
```
Same issue here - passing an empty string when currentExecutionId is falsy. The component should either handle empty executionId properly or this component should only be rendered when there's a valid executionId available.
@copilot open a new pull request to apply changes based on this feedback
```sql
-- Migration: Add execution_tool column to indicate which execution engine was used
-- Values: bolt, ansible
ALTER TABLE executions ADD COLUMN execution_tool TEXT DEFAULT 'bolt';
```
The migration adds the execution_tool column without a CHECK constraint, while the schema.sql file (line 19) includes CHECK(execution_tool IN ('bolt', 'ansible')). This mismatch means that existing databases upgraded via migration won't have the constraint enforced, while fresh installations will. Consider adding the CHECK constraint in the migration as well: ALTER TABLE executions ADD COLUMN execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible'));
Suggested change:

```diff
-ALTER TABLE executions ADD COLUMN execution_tool TEXT DEFAULT 'bolt';
+ALTER TABLE executions ADD COLUMN execution_tool TEXT DEFAULT 'bolt' CHECK(execution_tool IN ('bolt', 'ansible'));
```
```diff
@@ -160,6 +160,7 @@ export interface NodeHieraData {
   usedKeys: Set<string>;
   unusedKeys: Set<string>;
   hierarchyFiles: HierarchyFileInfo[];
```
The classes field is added to the interface but it's optional. When no classes are found (e.g., no catalog available), this field is set to an empty array [] in the HieraService. However, the frontend NodeHieraTab component checks if (hieraData.classes && hieraData.classes.length > 0) which means an empty array will render the "No catalog classes found" warning. This is correct behavior, but consider documenting in the type definition that an empty array indicates "catalog was checked but no classes found" versus undefined meaning "classes not available".
Suggested change:

```ts
hierarchyFiles: HierarchyFileInfo[];
/**
 * Catalog classes associated with this node.
 *
 * Semantics:
 * - `undefined`: Classes information is not available (e.g., catalog lookup
 *   was not performed, failed, or is not supported for this node).
 * - `[]` (empty array): Catalog was successfully checked, but no classes
 *   were found for this node.
 *
 * The frontend relies on this distinction to decide whether to show a
 * "No catalog classes found" warning (for an empty array) versus omitting
 * classes-related UI when the field is `undefined`.
 */
```
```ts
const target = Array.isArray(action.target)
  ? action.target[0]
  : action.target;

if (!target) {
  throw new Error("No target specified for action");
}
```
The executeAction method only takes the first target when action.target is an array (action.target[0]). While this is documented in error messages, Ansible actually supports running commands against multiple hosts. The current implementation silently ignores additional targets in the array. Consider either: 1) explicitly documenting this limitation and throwing an error when multiple targets are provided, or 2) implementing proper multi-target support by using Ansible patterns or looping through targets.
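A minimal sketch of option 2, assuming targets are plain inventory host names and that an Ansible colon-separated host pattern is acceptable here; `toHostPattern` is a hypothetical helper, not code from this PR:

```typescript
// Sketch: build an Ansible host pattern covering all requested targets
// instead of silently dropping everything after the first one.
// (Assumption: targets are plain inventory host names, not patterns.)
function toHostPattern(target: string | string[]): string {
  const targets = Array.isArray(target) ? target : [target];
  const valid = targets.filter((t) => t.trim().length > 0);
  if (valid.length === 0) {
    throw new Error("No target specified for action");
  }
  // Ansible accepts colon-separated patterns, e.g. "web1:web2:db1"
  return valid.join(":");
}

console.log(toHostPattern(["web1", "web2"])); // → web1:web2
```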
```ts
if (selectedTool === 'bolt' && !selectedTask) {
  validationError = 'Please select a package task';
  return false;
}
```
The validation logic `if (selectedTool === 'bolt' && !selectedTask)` rejects empty strings (they are falsy), but a whitespace-only string such as `' '` would pass the check and then fail to match a valid task. Consider using `if (selectedTool === 'bolt' && (!selectedTask || !selectedTask.trim()))`.
```ts
let transport: "ssh" | "winrm" | "local" = "ssh";
const connection = hostVars.ansible_connection as string | undefined;

if (connection === "winrm") {
  transport = "winrm";
} else if (connection === "local") {
  transport = "local";
```
The Node interface in bolt/types.ts supports "docker" transport (line 109), but the AnsibleService.getInventory() method only maps "ssh", "winrm", and "local" transports (lines 362-368). If Ansible is configured to use Docker as a connection type (ansible_connection=docker), it will default to "ssh" which is incorrect. Consider adding support for "docker" transport or explicitly handling unmapped connection types with a warning.
Suggested change:

```diff
-let transport: "ssh" | "winrm" | "local" = "ssh";
+let transport: "ssh" | "winrm" | "local" | "docker" = "ssh";
 const connection = hostVars.ansible_connection as string | undefined;
 if (connection === "winrm") {
   transport = "winrm";
 } else if (connection === "local") {
   transport = "local";
+} else if (connection === "docker") {
+  transport = "docker";
```
```diff
   Welcome to Pabawi Zero
 </h1>
 <p class="text-xl text-gray-600 dark:text-gray-400 max-w-2xl mx-auto">
-  Puppet And Bolt Awesome Web Interface
+  Puppet Ansible Bolt Awesome Web Interface
```
The title changed from "Pabawi zero" to "Pabawi Zero" (capitalizing the Z). While this is a stylistic choice, ensure this is intentional and consistent with branding. The acronym expansion below was also updated from "Puppet And Bolt Awesome Web Interface" to "Puppet Ansible Bolt Awesome Web Interface". Note that this changes the meaning of the 'A' in PABAWI from "And" to "Ansible", which may confuse existing users. Consider whether this rebrand needs documentation or announcement.
- Replace loose equality checks with nullish coalescing operator for safer defaults
- Change AnsibleService import to type-only import to reduce bundle size
- Refactor package name parameter validation with explicit type checking
- Remove unnecessary type assertion on ensure parameter
- Eliminate redundant null checks before stdout/stderr event listeners
- Replace string match with regex exec for more reliable facts extraction
- Add comprehensive type guards for all Ansible facts properties
- Improve facts mapping with explicit type validation for each field
- Enhance memory and uptime facts extraction with proper type checking
- Strengthen overall type safety throughout facts gathering pipeline
```diff
@@ -141,6 +141,8 @@ export class PuppetRunHistoryService {
   // Convert counts to RunHistoryData format
   const history = this.convertCountsToHistory(counts, startDate, endDate);

+  this.log(`Converted to ${String(history.length)} days of history`);
```
These new info logs (Date range: ..., Converted to ... days) will run on every aggregated-history request and can add significant noise in production. Consider downgrading them to debug (or gating behind expert mode) so normal info logs stay high-signal.
@copilot open a new pull request to apply changes based on this feedback
```ts
const result = await new Promise<{ stdout: string; success: boolean }>((resolve, reject) => {
  const child = spawn("ansible", args, {
    cwd: this.ansibleService.getAnsibleProjectPath(),
    env: process.env,
  });
```
getNodeFacts() spawns an ansible process without any timeout/kill logic. If the SSH connection hangs or Ansible stalls, this promise can block indefinitely and leak a child process. Add a timeout similar to checkBinary()/AnsibleService.executeCommand() (kill the child on expiry) and reject with a clear error.
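A hedged sketch of such a wrapper. `spawnWithTimeout` is a hypothetical helper; the timeout value, error message, and result shape are assumptions rather than the codebase's actual API:

```typescript
import { spawn } from "node:child_process";

// Sketch: spawn a child process, kill it and reject if it does not
// exit within timeoutMs. Prevents a hung ansible/SSH session from
// blocking the promise forever and leaking the child process.
function spawnWithTimeout(
  command: string,
  args: string[],
  timeoutMs: number,
): Promise<{ stdout: string; success: boolean }> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args);
    let stdout = "";
    let settled = false;

    const timer = setTimeout(() => {
      settled = true;
      child.kill("SIGKILL"); // ensure the hung process does not leak
      reject(new Error(`${command} timed out after ${String(timeoutMs)}ms`));
    }, timeoutMs);

    child.stdout.on("data", (chunk: Buffer) => {
      stdout += chunk.toString();
    });
    child.on("error", (err) => {
      if (settled) return;
      settled = true;
      clearTimeout(timer);
      reject(err);
    });
    child.on("close", (code) => {
      if (settled) return;
      settled = true;
      clearTimeout(timer);
      resolve({ stdout, success: code === 0 });
    });
  });
}
```

In getNodeFacts(), the `ansible` invocation above would then become something like `await spawnWithTimeout("ansible", args, 30_000)` (timeout value illustrative).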
```ts
const PlaybookExecutionBodySchema = z.object({
  playbookPath: z.string().min(1, "Playbook path is required"),
  extraVars: z.record(z.unknown()).optional(),
  expertMode: z.boolean().optional(),
  tool: z.enum(["ansible"]).optional(),
});

export function createPlaybooksRouter(
  integrationManager: IntegrationManager,
  executionRepository: ExecutionRepository,
  streamingManager?: StreamingExecutionManager,
): Router {
  const router = Router();
  const logger = new LoggerService();

  router.post(
    "/:id/playbook",
    asyncHandler(async (req: Request, res: Response): Promise<void> => {
      const startTime = Date.now();
      const expertModeService = new ExpertModeService();
      const requestId = req.id ?? expertModeService.generateRequestId();

      const debugInfo = req.expertMode
        ? expertModeService.createDebugInfo("POST /api/nodes/:id/playbook", requestId, 0)
        : null;

      try {
        const params = NodeIdParamSchema.parse(req.params);
        const body = PlaybookExecutionBodySchema.parse(req.body);

        const nodeId = params.id;
        const playbookPath = body.playbookPath;
        const extraVars = body.extraVars;
        const expertMode = body.expertMode ?? false;

        const ansibleTool = integrationManager.getExecutionTool("ansible");
        if (!ansibleTool) {
          const errorResponse = {
            error: {
              code: "EXECUTION_TOOL_NOT_AVAILABLE",
              message: "Ansible integration is not available",
            },
          };

          res.status(503).json(
            debugInfo ? expertModeService.attachDebugInfo(errorResponse, debugInfo) : errorResponse,
          );
          return;
        }

        const aggregatedInventory = await integrationManager.getAggregatedInventory();
        const node = aggregatedInventory.nodes.find(
          (n) => n.id === nodeId || n.name === nodeId,
        );

        if (!node) {
          const errorResponse = {
            error: {
              code: "INVALID_NODE_ID",
              message: `Node '${nodeId}' not found in inventory`,
            },
          };

          res.status(404).json(
            debugInfo ? expertModeService.attachDebugInfo(errorResponse, debugInfo) : errorResponse,
          );
          return;
        }

        const executionId = await executionRepository.create({
          type: "task",
          targetNodes: [nodeId],
          action: playbookPath,
          parameters: {
            playbook: true,
            extraVars,
          },
          status: "running",
          startedAt: new Date().toISOString(),
          results: [],
          expertMode,
          executionTool: "ansible",
        });

        void (async (): Promise<void> => {
          try {
            const streamingCallback = streamingManager?.createStreamingCallback(
              executionId,
              expertMode,
            );

            const result = await integrationManager.executeAction("ansible", {
              type: "plan",
              target: nodeId,
              action: playbookPath,
              parameters: {
                extraVars,
              },
              metadata: {
                streamingCallback,
              },
            });
```
No automated tests were added for the new playbook execution endpoint. Since this introduces a new public API surface and async execution flow (execution record creation + streaming completion/error updates), add backend tests covering: 202 response with executionId, 503 when Ansible tool unavailable, 404 for unknown node, and successful completion updating the execution record.
```svelte
  Puppet Ansible Bolt Awesome Web Interface
</p>
```
UI tagline text is missing punctuation/grammar ("Puppet Ansible Bolt Awesome Web Interface"). Consider adding commas/"and" for readability (e.g., "Puppet, Ansible, and Bolt Awesome Web Interface").
```diff
+// Get Class resources specifically
+const classResources = resourcesByType.Class;

-// Filter for Class resources and extract titles
-const classes = catalog.resources
-  .filter(resource => resource.type === "Class")
-  .map(resource => resource.title.toLowerCase());
+this.log(`Found ${String(classResources.length)} Class resources for node: ${nodeId}`);

-this.log(`Found ${String(classes.length)} classes in catalog for node: ${nodeId}`);
+// Extract class titles and convert to lowercase
+const classes = classResources.map(resource => resource.title.toLowerCase());
```
getIncludedClasses() assumes resourcesByType.Class exists; when PuppetDB returns no Class key this becomes undefined, triggering an exception on .length/.map and forcing the catch-path. Default classResources to an empty array (e.g., resourcesByType.Class ?? []) so the no-classes case is handled without throwing and without misleading error logs.
```diff
 // Log all classes for debugging
 if (classes.length > 0) {
-  const exampleClasses = classes.slice(0, 5).join(", ");
-  this.log(`Example classes: ${exampleClasses}`);
+  this.log(`All classes: ${classes.join(", ")}`);
 } else {
   this.log(`WARNING: No Class resources found. This may indicate the node has no catalog or no classes included.`);
 }
```
Logging the full class list at info level (All classes: ...) can produce extremely large log lines on real catalogs, impacting log volume and readability. Consider logging only a count (and maybe a small sample) at info level, and put the full list behind debug logging (or omit entirely).
@alvagante I've opened a new pull request, #22, to work on those changes. Once the pull request is ready, I'll request review from you.

@alvagante I've opened a new pull request, #23, to work on those changes. Once the pull request is ready, I'll request review from you.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Alessandro Franceschi <al@example42.com>
@alvagante I've opened a new pull request, #24, to work on those changes. Once the pull request is ready, I'll request review from you.
Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
```ts
this.log(`Date range: ${startDate.toISOString()} to ${endDate.toISOString()}`, "debug");
```
This new date-range log is emitted at info level on every aggregated history request. It’s likely too noisy for production logs; consider changing it to debug-level (or gating it behind expert mode) so normal operation doesn’t spam logs.
```ts
this.log(`Converted to ${String(history.length)} days of history`, "debug");
```
Same as above: Converted to X days of history is info-level and will be logged on every call. Consider lowering to debug-level or removing once the date-range fix is validated.
```ts
  "-m",
  "package",
  "-a",
  this.toModuleArgString(moduleArgs),
];
```
For Ansible ad-hoc usage, -a module arguments for the package module are not JSON; Ansible expects a key=value string (or YAML-style args), so JSON.stringify(moduleArgs) is very likely to fail at runtime. Convert moduleArgs to the expected key=value format (and ensure values are properly quoted/escaped).
```ts
// Ansible setup module returns JSON in stdout
const factsMatch = /"ansible_facts":\s*({[\s\S]*?})\s*}/.exec(result.stdout);
if (factsMatch) {
  const ansibleFacts = JSON.parse(factsMatch[1]) as Record<string, unknown>;
```
The regex used to extract ansible_facts is non-brace-aware and will stop at the first } inside the facts object, which will make JSON.parse fail or produce truncated data for real-world nested facts. Instead, parse the full JSON object from Ansible output (e.g., extract the JSON payload after => and parse it, or force JSON stdout callback), then read ansible_facts from the parsed structure.
Suggested change:

```ts
// Ansible setup module returns JSON in stdout, but may contain nested objects,
// so we need a brace-aware extraction of the ansible_facts object.
const extractAnsibleFacts = (stdout: string): Record<string, unknown> | null => {
  const key = '"ansible_facts":';
  const keyIndex = stdout.indexOf(key);
  if (keyIndex === -1) {
    return null;
  }
  const braceStart = stdout.indexOf("{", keyIndex + key.length);
  if (braceStart === -1) {
    return null;
  }
  let depth = 0;
  let inString = false;
  let escaped = false;
  for (let i = braceStart; i < stdout.length; i++) {
    const ch = stdout[i];
    if (escaped) {
      escaped = false;
      continue;
    }
    if (ch === "\\") {
      escaped = true;
      continue;
    }
    if (ch === '"') {
      inString = !inString;
      continue;
    }
    if (inString) {
      continue;
    }
    if (ch === "{") {
      depth += 1;
    } else if (ch === "}") {
      depth -= 1;
      if (depth === 0) {
        const objectText = stdout.slice(braceStart, i + 1);
        try {
          const parsed = JSON.parse(objectText);
          if (parsed && typeof parsed === "object") {
            return parsed as Record<string, unknown>;
          }
        } catch {
          return null;
        }
      }
    }
  }
  return null;
};

const ansibleFacts = extractAnsibleFacts(result.stdout);
if (ansibleFacts) {
```
```ts
// Get Class resources specifically
const classResources = resourcesByType.Class;
```
resourcesByType.Class may be undefined when PuppetDB returns no Class resources, which would throw when accessing .length and break Hiera data for that node. Default this to an empty array (e.g., resourcesByType.Class ?? []) before using it.
Suggested change:

```diff
-// Get Class resources specifically
-const classResources = resourcesByType.Class;
+// Get Class resources specifically, defaulting to an empty array if none are present
+const classResources = resourcesByType.Class ?? [];
```